perm filename ARTIFI.3[W78,JMC] blob sn#353800 filedate 1978-05-09 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00003 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00002 00002	.DEVICE XGP
C00005 00003	.once verbatim
C00031 ENDMK
C⊗;
.DEVICE XGP;
.PAGE FRAME 54 HIGH 60 WIDE;
.evenleftborder ← oddleftborder ← 1000;
.area text lines 4 to 54;
.title area heading lines 1 to 3
.double space
.indent 4,0,0
.place text;
.sname←null;
.nojust
.
.TURN ON "α{∂%"; turn on "_" for "#" ;
.turn on "α∂%"
.FONT 1 "sail25";
.FONT 2 "BAXM30";
.FONT 3 "baxb30";
.FONT 4 "SUB";
.FONT 5 "SUP";
.FONT 6 "BASL35";
.FONT 7 "NGR25";
.FONT 8 "MATH30";
.FONT 9 "FIX25";
.FONT A "GRK30";
.font B "zero30"
.FONT C "BAXI30";
.FONT D "MS25";
.at "⊗"word" "	⊂ ("%2word%* ") ⊃;
.COUNT ITEM
.COUNT SUBITEM IN ITEM
.COUNT NOTE inline
.<<
.AT "~" TEXT "~"; ⊂
.SEND NOTES ⊂
.{NOTE!}. TEXT
.
.⊃;
.AT "$" ⊂ NEXT NOTE; ("%5"&NOTE!&"%*");⊃;
.>>
.AT "#" ⊂NEXT ITEM;(ITEM!);⊃;
.AT "&" ⊂NEXT SUBITEM;ONCE INDENT 10;(ITEM!&"."&SUBITEM!);⊃;
.macro cb(head) ⊂ if lines<7 then next page else skip; once center; select 3
head

.⊃
.macro bb(head) ⊂ if lines<7 then next page else skip; once select 3
head

.⊃

.count eq inline  from 1 to 999;
.at "!!" lab " " ⊂
.lab next eq
.eq})∂8{
.⊃
.<<formulas are signalled by !! followed by an optional identifier followed
. by colon followed by the formula.  A reference to formula nn is written {eq nn}
.!!e1:	x=x
.!!e2:	y=y
.
.Equation {eq e2} comes after {eq e1}.
.>>
.at "!a" txt ";"	⊂
.("txt"[1]&"∩[%6"&"txt"[2]&"]&∪[ "&"txt"[3]&"%*]");
.  ⊃
.at "!j" ⊂"%2j∪[%60%2]&''"⊃;

.AT 8 ⊂ONCE INDENT 5;⊃
.at "qat" ⊂"%3at%*"⊃
.at "qNIL" ⊂"%1NIL%*"⊃
.at "qnil" ⊂"%1NIL%*"⊃
.at "qaa" ⊂"%3aa%*"⊃
.at "qad" ⊂"%3ad%*"⊃
.at "qeq" ⊂"%3eq%*"⊃
.at "qif" ⊂"%3if%*"⊃
.at "qthen" ⊂"%3then%*"⊃
.at "qelse" ⊂"%3else%*"⊃
.at "qda" ⊂"%3da%*"⊃
.at "qdd" ⊂"%3dd%*"⊃
.at "qa" ⊂"%3a%*"⊃
.at "qn" ⊂"%3n%*"⊃
.at "qd" ⊂"%3d%*"⊃
.at "qb" ⊂"%8T%*"⊃
.at "qw" ⊂"%Aw%*"⊃
.at "qx" ⊂"%8x%*"⊃
.at "qg" ⊂"%Ag%*"⊃
.once verbatim
ARTIFICIAL INTELLIGENCE (AI),
the branch of computer science concerned with making machines behave intelligently.
Ultimately, AI researchers hope to understand intelligence well enough to make
computer programs with human-level or higher general intelligence.
Programs now exist that play chess and other games at expert
level, recognize connected speech using a limited vocabulary, answer
questions about stories, prove theorems in certain branches of
mathematics, and recognize objects in scenes using a television camera.
However, these
programs have limited abilities and no creativity, and each works in its own
narrow domain.
.<<It seemed necessary to make the list of accomplishments more precise
.and modest>>

	AI research has two aspects: The theoretical part involves discovering
what intellectual mechanisms exist and how they interact in achieving goals.
The experimental part consists of writing computer programs or building machines
to solve particular problems or behave intelligently in particular kinds of
situations.  Sometimes the experimental programs have practical goals, but
more often their purpose is to verify that certain methods will achieve
certain goals.
.<<The draft suggests that each problem is solved in two phases.  The point
.is rather that the subject has its abstract and concrete sides.  Also
.the idea that a program is a theory is one I don't want to state, because
.I don't agree with it.>>

	Intelligent machines are old in mythology, science
fiction and swindling.  In the early 1900s, the Spanish inventor Torres y
Quevedo made a machine that could checkmate its opponent with a rook and
king against a king, and speculated intelligently about possible
intelligent machines.  However, systematic work on
AI began only after the invention of
digital computers.  The
first serious scientific article on AI was written by
the British mathematician Alan Turing in 1950, and
the first full time research group was founded in 1954
at Carnegie-Mellon University by Allen Newell and
Herbert Simon.

	The main areas of AI research, and some of the techniques used,
are described in what follows.

SEARCH.  When a computer must act, it usually has several
choices.  Each choice may lead to a number of different consequences, depending
on nature or on the actions of an opponent in a game,
and each consequence may result in new possible actions, and so on.

	The main problem in searching through
the "tree of possibilities" is the so-called "combinatorial explosion".
If there are 10 possibilities at each level of search, then 10 levels of
search lead to 10 billion possibilities, and this is infeasible even with
fast computers.  Therefore, the programmer must devise %2heuristics%1 that
allow the program to reject most of the alternatives, even at the risk
of sometimes missing the best answer.
Thus, when there isn't time to evaluate each line of play to the end,
the program must decide when it can stop searching and approximately
evaluate the position.

	The performance of computer chess programs is a good measure of
progress in this part of AI.  The chess programs of the 1950's played
much worse than the people who wrote them, because the programmers, some
of whom were master level players, didn't understand their own
mental processes well enough to discover the methods they themselves used
to limit search.
Since then, enough of the ways in which humans limit search have been
discovered so that in 1977, a program written by David Slate and Lawrence
Atkin won the Minnesota Open Tournament with a master-level performance.
(It lost the subsequent tournament for the state championship.)  However, this
performance depended on a very fast computer looking at hundreds of
thousands of positions, so it is clear that many of the human ways of
limiting search remain to be understood.

	For example, one early program written by a chess master
looked at the seven most
"plausible" moves and the seven plausible replies to each of these
and seven replies to each of these and seven to each of these for
a total of 7%8x%17%8x%17%8x%17 = 2401 final positions.
Most of these positions were examined unnecessarily.
When one refuting reply has shown a move to be worse than a previously
examined move of the same player,
there is no need to look for further refutations.  This observation
(illustrated in the figure)
has been elaborated into a method called the alpha-beta heuristic
for reducing search and is used by all modern game playing programs.  The
joke is that it is also used by all human players, modern or not, and
whether they realize it or not.
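
	The alpha-beta heuristic can be illustrated by a small sketch in a present-day programming language (Python).  The game tree here is a toy one whose leaf values are given in advance; a real program would generate moves, stop at some depth, and apply an approximate evaluation at the cutoff.
.begin verbatim
# Alpha-beta search over a toy game tree.  A position is either a number
# (its approximate value to the first player) or a list of successor
# positions.  In a real game-playing program the tree would be generated
# move by move and the numbers supplied by an evaluation function.

def alphabeta(position, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(position, list):      # leaf: use its static value
        return position
    if maximizing:                          # first player wants the largest value
        value = float("-inf")
        for successor in position:
            value = max(value, alphabeta(successor, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # one refutation suffices; skip the rest
                break
        return value
    else:                                   # opponent wants the smallest value
        value = float("inf")
        for successor in position:
            value = min(value, alphabeta(successor, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

print(alphabeta([[3, 5], [2, 9]], True))    # prints 3; the leaf 9 is never examined
.end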
	

GOALS AND SUBGOALS.  Attaining a goal often requires finding
a sequence of actions, using information about the effects of particular
actions and about the preconditions for performing them successfully.
A simple example is building a tower
of blocks where a precondition for placing one block on another is that both
the block to be moved and the place where it goes should be clear.  Thus moving
one block may require moving other blocks first.
Gerald Sussman's program for designing electronic circuits required more
elaborate treatment of how doing one action affects the preconditions for
others.
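
	The role of preconditions and subgoals can be sketched in Python for the tower-of-blocks example; the planner below is only a toy illustration, not Sussman's or any other historical program.  Moving a block has the precondition that it and its destination be clear, and the planner achieves that precondition by generating subgoals to move the blocks that are in the way.
.begin verbatim
# A toy planner for stacking blocks.  `on` records what each block rests
# on ("table" for the table).  The precondition for putting x on y is that
# nothing rests on x or on y; clearing them is handled by subgoals.

def blocks_on(x, on):
    return [b for b, support in on.items() if support == x]

def clear(x, on, plan):
    """Subgoal: remove anything resting on x by putting it on the table."""
    for b in blocks_on(x, on):
        clear(b, on, plan)              # whatever rests on b must be cleared first
        plan.append((b, "table"))
        on[b] = "table"

def put_on(x, y, on, plan):
    """Goal: place block x on block y, generating subgoals as needed."""
    clear(x, on, plan)
    clear(y, on, plan)
    plan.append((x, y))
    on[x] = y

on = {"A": "table", "B": "A", "C": "B"}     # C sits on B, which sits on A
plan = []
put_on("A", "C", on, plan)                  # goal: A on C
print(plan)            # [('C', 'table'), ('B', 'table'), ('A', 'C')]
.end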

KNOWLEDGE REPRESENTATION.  Many of the difficulties in making machines perform tasks
turn on the question of deciding what information the program should have,
how further conclusions can be drawn from initial information, and how
the information should be stored in the computer.  These difficulties have
led to research on the subject of what is knowledge.  Many of the questions
asked are those studied by philosophers under the name of %2epistemology%1 -
the theory of knowledge.  Mathematical logic has provided powerful means of 
representing facts in computers and powerful modes of reasoning.  However, it
has turned out that not all modes of reasoning performed by humans and needed
for problem solving are represented in present systems of mathematical logic.
Logic is excellent for those methods of reasoning that never lead to
error from true premises, but intelligence also requires methods of reasoning
that generate conjectures that are sometimes false.
The work of John McCarthy at Stanford
University has shown the importance and difficulty of representing facts
about situations in which new actions may start while others are in progress.
Knowledge about knowledge has also been studied.  A computer program that
can plan a trip must know that it cannot find a flight to New York by thinking
about it, but travel agents know about airplane flights, and their telephone
numbers are in the Yellow Pages.
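
	How facts can be represented and further conclusions drawn from them can be sketched in Python.  The facts, rules, and travel vocabulary below are invented for illustration; systems based on mathematical logic are far more general.
.begin verbatim
# Facts are tuples and rules are (premises, conclusion) pairs, with
# variables written as strings beginning with "?".  The program adds every
# conclusion whose premises match known facts until nothing new can be
# concluded (simple forward chaining).

facts = {("flight", "SFO", "JFK"),
         ("flight", "JFK", "BOS")}

rules = [([("flight", "?x", "?y")], ("reachable", "?x", "?y")),
         ([("reachable", "?x", "?y"), ("flight", "?y", "?z")],
          ("reachable", "?x", "?z"))]

def match(pattern, fact, bindings):
    """Match a pattern against a fact; return extended bindings or None."""
    if len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if bindings.get(p, f) != f:
                return None
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def satisfy(premises, facts, bindings):
    """Yield every set of bindings that makes all the premises known facts."""
    if not premises:
        yield bindings
        return
    for fact in list(facts):
        b = match(premises[0], fact, bindings)
        if b is not None:
            yield from satisfy(premises[1:], facts, b)

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for bindings in list(satisfy(premises, facts, {})):
                new = tuple(bindings.get(p, p) for p in conclusion)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

print(forward_chain(facts, rules))
# The result includes ("reachable", "SFO", "BOS"), concluded from two flights.
.end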

AI and the Everyday World

Besides dealing with purely symbolic problems such as chess and mathematics,
we want intelligent machines that can see, hear, control motion, and make
things.

PATTERN MATCHING.  Programs have been written that find objects by tracing outlines
of color or brightness changes, or by growing regions of a single color or texture.
Such tasks require a computer to store patterns and to recognize objects by matching
patterns.  A computer might store its idea of a dog as a collection
of legs, tail, etc., each of a given shape and connected in the right way.
It can try to find dogs in a picture by matching the dog pattern with parts
of the picture.  The program must compute how a dog will appear with its legs
in some position when looked at from some angle and partly hidden by other 
objects.  Actually, in order to recognize dogs it must do this process backwards,
deciding what kind of dog would lead to the appearance it sees.

	General principles of pattern description and techniques for
pattern matching apply to recognizing dogs, to recognizing chess
configurations, and to many other problems.
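
	The part-and-relation kind of pattern matching described above can be sketched in Python.  The "dog" pattern and the picture description below are invented; as the text notes, a real program must also reckon with viewpoint, partial hiding, and imperfect region finding.
.begin verbatim
# A pattern is a set of required parts and required relations among them.
# Matching tries to assign pattern parts to picture regions so that every
# required relation holds between the assigned regions.

from itertools import permutations

dog_pattern = {
    "parts": ["body", "head", "leg"],
    "relations": [("attached", "head", "body"), ("below", "leg", "body")],
}

picture = {
    "regions": ["r1", "r2", "r3"],
    "relations": {("attached", "r2", "r1"), ("below", "r3", "r1")},
}

def match_pattern(pattern, picture):
    """Return an assignment of pattern parts to picture regions, or None."""
    parts = pattern["parts"]
    for regions in permutations(picture["regions"], len(parts)):
        assignment = dict(zip(parts, regions))
        if all((rel, assignment[a], assignment[b]) in picture["relations"]
               for rel, a, b in pattern["relations"]):
            return assignment
    return None

print(match_pattern(dog_pattern, picture))
# {'body': 'r1', 'head': 'r2', 'leg': 'r3'}
.end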

Patterns of action called frames have been studied by Marvin L. Minsky and
his students at Massachusetts Institute of Technology.  A typical frame is the
event of visiting a restaurant; subframes would consist of being seated,
ordering, waiting, eating, paying the bill, and leaving.  Such frames have been
used by Roger C. Schank at Yale University in programs that answer questions
about stories, and also fill in information omitted from a story, because
it is implicit in the frame.
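
	The frame idea can be sketched in Python with a toy restaurant frame; the events listed are invented for illustration and are not taken from Minsky's or Schank's actual systems.
.begin verbatim
# A frame is a stereotyped sequence of sub-events.  When a story based on
# the frame does not mention one of them, the program assumes it happened
# unless the story denies it, and so can answer questions about things the
# story leaves implicit.

restaurant_frame = ["enter", "be seated", "order", "eat", "pay", "leave"]

def happened(event, story, frame, denied=()):
    """Did the event occur?  Yes if mentioned, or if implied by the frame."""
    if event in story:
        return True
    if event in denied:                  # the story explicitly rules it out
        return False
    return event in frame                # otherwise fall back on the frame

story = ["enter", "order", "eat"]        # the story never mentions paying
print(happened("pay", story, restaurant_frame))    # True, implied by the frame
print(happened("swim", story, restaurant_frame))   # False
.end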

UNDERSTANDING LANGUAGE.  Programs that recognize and produce human speech have
been written.  Some such programs recognize hundreds of isolated words; some
can handle connected speech if it is not too difficult.

	One idea for making an intelligent machine is to first make it
capable of understanding English, and then to let it read textbooks,
encyclopedias, and scientific articles.  There are computer programs that
can read stories from first grade readers and answer some questions about
them.  Other programs can converse with physicians about bacterial
diseases of the blood.  All present programs, however, are quite limited
in the subject matter they can understand.  The difficulty is not one of
vocabulary - whole dictionaries are available on tape - but rather that
understanding encyclopedia articles
requires more initial knowledge and ability to reason than
can now be programmed.

	In the 1950's programs were written to translate from one language
to another.  They weren't very good, and it was some years before everyone
was convinced that successful translation requires programs that
"understand" the material being translated.  Efforts are now being made to
give computers an increasingly better understanding of larger and larger
fragments of natural language.  This "understanding" is tested by the
performance of question-answering programs that answer questions about a
text on the basis of information in the text and the common sense
knowledge possessed by the program.

	One program combined English dialogue and problem solving to
answer questions and perform requested actions in a simulated "blocks
world".  Devised by Terry Winograd, SHRDLU could be requested, "Pick up
the red pyramid and put it on the green block."  SHRDLU would figure out
that "it" referred to the red pyramid.  It would clear off the green block
if necessary, and would ask, "Which green block?" if there was more than
one.
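
	Two of the abilities attributed to SHRDLU above, resolving "it" and asking for clarification, can be suggested by a toy Python sketch; it is only an illustration, not Winograd's actual method, and the blocks listed are invented.
.begin verbatim
# Resolve a description to a single object, asking a question when the
# description fits more than one, and take "it" to mean the most recently
# mentioned object.  (A toy illustration, not Winograd's method.)

blocks = [{"name": "b1", "color": "green", "shape": "block"},
          {"name": "b2", "color": "green", "shape": "block"},
          {"name": "p1", "color": "red", "shape": "pyramid"}]

def find(color, shape, mentioned):
    candidates = [b for b in blocks
                  if b["color"] == color and b["shape"] == shape]
    if len(candidates) == 1:
        mentioned.append(candidates[0])
        return candidates[0]
    return "Which " + color + " " + shape + "?"     # ambiguous: ask the user

def resolve_it(mentioned):
    return mentioned[-1] if mentioned else None     # "it" = last object mentioned

mentioned = []
print(find("red", "pyramid", mentioned)["name"])    # p1
print(resolve_it(mentioned)["name"])                # "it" refers to p1
print(find("green", "block", mentioned))            # Which green block?
.end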

LEARNING FROM EXPERIENCE.  Humans and animals learn from experience, and machines
should do likewise.  An early example was a program by Arthur Samuel for playing
checkers.  The behavior of the program was controlled by certain numbers that
determined how it evaluated positions; for example, the relative values of a
king and a single man.  The program would read books of games played by master
players and adjust the numbers until they predicted as many as possible of the
moves regarded as good by the experts.  Combined with search, this makes
an excellent program.  A version of it, without the learning, is sold
for playing through a television set.
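
	The kind of adjustment Samuel's program made can be suggested by a much simplified Python sketch: the evaluation is a weighted sum of features, and a weight is nudged whenever the current weights fail to prefer the book move.  The features and numbers below are invented.
.begin verbatim
# Learning evaluation weights from book games.  Each example pairs the
# features of the position after the book move with the features after a
# move the master rejected; weights are nudged until the book move scores
# higher.  (A toy sketch, much simpler than Samuel's actual procedure.)

weights = {"men": 1.0, "kings": 1.0}        # a king starts with no extra value

def score(features, weights):
    return sum(weights[f] * v for f, v in features.items())

book_games = [
    ({"men": 0, "kings": 1}, {"men": 1, "kings": 0}),
    ({"men": 1, "kings": 1}, {"men": 2, "kings": 0}),
]

for _ in range(20):
    for book, rejected in book_games:
        if score(book, weights) <= score(rejected, weights):
            # The program disagrees with the master: shift weight toward
            # the features the book move has more of.
            for f in weights:
                weights[f] += 0.1 * (book[f] - rejected[f])

print(weights)      # the king's weight ends up larger than a single man's
.end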

	Patrick Winston 
of Massachusetts Institute of Technology wrote a program that
learns to classify objects
according to the presence of subobjects related in a specified way.  In Winston's
program, an arch is recognized as consisting of three objects, one of which is
supported by the other two, which do not touch each other.  An arcade is
learned as a line of arches arranged so that there is a path under all of them.
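
	The structural description of an arch quoted above can be written as a small Python test; the scene representation by "supports" and "touches" relations is invented for illustration.
.begin verbatim
# An arch, as described in the text: three objects, one of which is
# supported by the other two, and the two supports do not touch each other.

def is_arch(objects, supports, touches):
    if len(objects) != 3:
        return False
    for top in objects:
        legs = [o for o in objects if o != top]
        if (all((leg, top) in supports for leg in legs)
                and (legs[0], legs[1]) not in touches
                and (legs[1], legs[0]) not in touches):
            return True
    return False

print(is_arch(["a", "b", "c"],
              supports={("a", "c"), ("b", "c")},    # a and b support c
              touches=set()))                       # prints True
.end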

	To teach someone to play tic-tac-toe or solve an algebra problem,
it is enough to tell him the rules and rely on his intelligence to apply
them.  The ability
of a program to learn from experience depends on how its own behavior is
represented within the machine.  If the program is to learn efficiently,
then what a human would regard as simple changes in behavior must correspond
to small changes in the machine's representation of that behavior.  With
present computer programs, in order to change a program's behavior, one
must understand the program thoroughly and make changes in a number of
places.  It is as though education were accomplished by brain surgery.  A
major research goal of AI is to develop programs with common sense, able
to combine instructions with their own knowledge.
Making a computer that can learn well therefore requires progress in
knowledge and its representation.

INDUSTRIAL ROBOTS.  Robots that do strenuous or dangerous tasks are in
increasing industrial use.  The earliest were programmed to repeat the same
sequence of actions, such as putting an object found in a fixed location into a
punch press or a furnace and removing it later.  Even such crude robots are
more flexible than a special machine that must be built for each job and
scrapped when the job changes.  Recent industrial robots have programs
whose actions depend on sensing the environment, and some even use
television cameras to locate objects.
General purpose assembly robots and languages for programming them are now
being developed at Stanford University, General Motors and elsewhere.
Another program drives a vehicle so as to avoid
obstacles.

EXPERT PROGRAMS.  Edward A. Feigenbaum and Joshua Lederberg at Stanford
University pioneered the development of programs that embody the knowledge
of an expert in some field.  Such programs are developed by interviewing
experts and getting them to help improve further versions of a program.
DENDRAL is expert in determining the structure of an organic compound from
mass spectrograph observations, and MYCIN helps a doctor diagnose bacterial
infections of the blood, recommending tests and treatment, and then further
tests and treatment based on the results of the first.  MYCIN is
intended only to make suggestions; the doctor still must understand the
reasons for everything he does.
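
	The flavor of such rule-based expert programs can be suggested by a toy Python sketch; the rules below are invented for illustration and are not MYCIN's.
.begin verbatim
# Each rule examines what is known so far and may recommend a test or a
# treatment; findings from the first tests lead to further recommendations.
# (Invented toy rules, not MYCIN's.)

rules = [
    (lambda k: k.get("fever") and "culture" not in k,
     "order a blood culture"),
    (lambda k: k.get("culture") == "gram-negative rods",
     "treat for gram-negative infection"),
]

def recommend(known):
    return [advice for condition, advice in rules if condition(known)]

print(recommend({"fever": True}))
# ['order a blood culture']
print(recommend({"fever": True, "culture": "gram-negative rods"}))
# ['treat for gram-negative infection']
.end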

INFORMATION PROCESSING PSYCHOLOGY.  Artificial intelligence
and the study of human intelligence are closely related.  The information
processing approach to psychology is mainly replacing behaviorism, which was
chiefly concerned with finding direct relations between stimuli received by
organisms and their responses.  The information processing approach, initiated
by Allen Newell and Herbert Simon in the 1950's, writes computer programs that
match complex problem-solving behavior.  Unlike stimulus-response theories,
information processing theories do not postulate direct relations between
inputs and outputs, because the internal processes can be very complex.  Both
artificial intelligence and information processing psychology must determine
what intellectual mechanisms are required to solve different kinds of problems.

THE STUDY OF AI.  Artificial intelligence is a young and difficult branch of
science.  Studying it requires the ability to program a computer, especially
in programming languages such as LISP (list processing), which is popular
in AI research.  Also important is the study of the theory of the correctness
of computer programs, and complexity theory - the study of how much computing
is required to solve various kinds of problems.  Some background in mathematical
logic is also necessary.

	In AI, there is less to learn than in physics or mathematics in
order to reach the frontier of the subject.  Much of what the student
learns is controversial; some of it will probably be shown wrong.  Besides
connections with psychology, artificial intelligence needs facts from and
contributes to mathematical logic, linguistics, and the physiology of the
nervous system.  Finally, AI studies many questions that are also studied
by philosophers from a different point of view.

	The ultimate goal of AI is to understand intelligence well enough
to make computers more intelligent than people.  Some scientists do not
think this possible, while others think they are close to success.  Most
would agree that while present methods will lead to further
accomplishments, a breakthrough in giving programs a general view of the
world is required before human-level intelligence is achieved.

John McCarthy